36 research outputs found

    Towards Measuring Adversarial Twitter Interactions against Candidates in the US Midterm Elections

    Full text link
    Adversarial interactions against politicians on social media such as Twitter have a significant impact on society. In particular, they disrupt substantive political discussions online and may discourage people from seeking public office. In this study, we measure the adversarial interactions against candidates for the US House of Representatives during the run-up to the 2018 US general election. We gather a new dataset consisting of 1.7 million tweets involving candidates, one of the largest corpora focusing on political discourse. We then develop a new technique for detecting tweets with toxic content that are directed at any specific candidate. This technique allows us to more accurately quantify adversarial interactions towards political candidates. Further, we introduce an algorithm to induce candidate-specific adversarial terms, capturing more nuanced adversarial interactions that previous techniques may not consider toxic. Finally, we use these techniques to outline the breadth of adversarial interactions seen in the election, including offensive name-calling, threats of violence, posting discrediting information, attacks on identity, and adversarial message repetition.
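    The abstract does not spell out the detection technique, so the following is only a rough illustration of the directed-toxicity idea: flag a tweet as adversarial only when it both mentions a specific candidate and scores as toxic. The lexicon, threshold, and scoring function below are hypothetical placeholders, not the authors' method.

```python
# Illustrative sketch of directed-toxicity filtering (not the paper's actual method).
# A tweet is flagged only if it mentions a given candidate AND scores above a
# toxicity threshold. The lexicon and threshold here are toy placeholders.

TOXIC_LEXICON = {"idiot", "liar", "crook", "traitor"}  # illustrative terms only

def toxicity_score(text: str) -> float:
    """Crude lexicon-based score: fraction of tokens found in the toxic lexicon."""
    tokens = [t.strip("@#.,!?").lower() for t in text.split()]
    if not tokens:
        return 0.0
    return sum(t in TOXIC_LEXICON for t in tokens) / len(tokens)

def is_adversarial(tweet: str, candidate_handle: str, threshold: float = 0.1) -> bool:
    """Flag a tweet as adversarial toward a candidate if it mentions the candidate
    and its toxicity score exceeds the threshold."""
    mentions_candidate = candidate_handle.lower() in tweet.lower()
    return mentions_candidate and toxicity_score(tweet) > threshold

if __name__ == "__main__":
    tweet = "@candidateX you are a liar and a crook"
    print(is_adversarial(tweet, "@candidateX"))  # True with this toy lexicon
```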

    Bitcoin price prediction using ARIMA and LSTM

    No full text
    The goal of this paper is to compare the accuracy of Bitcoin price (in USD) prediction based on two different models: a Long Short-Term Memory (LSTM) network and an ARIMA model. Real-time price data is collected via Pycurl from Bitfine. The LSTM model is implemented with Keras and TensorFlow. The ARIMA model is used mainly as a classical baseline for time series forecasting; as expected, it can make efficient predictions only over short time intervals, and the outcome depends on the time period. The LSTM can reach better performance, at the cost of extra, indispensable time for model training, especially on a CPU.
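    As a rough sketch of the comparison described above: the abstract confirms Keras/TensorFlow for the LSTM, but the use of statsmodels for ARIMA, the ARIMA order, the lookback window, and the network size are assumptions made purely for illustration, not the paper's settings.

```python
# Minimal sketch of the two forecasting approaches on a univariate BTC/USD series.
# statsmodels for ARIMA is an assumption; orders, window size, and layer sizes
# are illustrative only.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from tensorflow import keras

prices = np.cumsum(np.random.randn(500)) + 30000.0  # placeholder for real BTC/USD data
train, test = prices[:-50], prices[-50:]

# --- ARIMA: fit on the training series and forecast the test horizon ---
arima = ARIMA(train, order=(5, 1, 0)).fit()
arima_forecast = arima.forecast(steps=len(test))

# --- LSTM: turn the series into sliding windows of length `lookback` ---
lookback = 10
X = np.array([train[i:i + lookback] for i in range(len(train) - lookback)])
y = train[lookback:]
X = X[..., None]  # shape (samples, timesteps, features)

lstm = keras.Sequential([
    keras.layers.Input(shape=(lookback, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(X, y, epochs=5, batch_size=32, verbose=0)

# One-step-ahead prediction from the last training window (illustrative only)
next_price = lstm.predict(train[-lookback:][None, :, None], verbose=0)[0, 0]
print(arima_forecast[:3], next_price)
```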

    Characterizing and Mitigating Threats to Trust and Safety Online

    No full text
    146 pages
    In the past decade, social media platforms have become increasingly important in everyone's lives. However, the services they provide are constantly abused by some of their users to create real human harm. Such abusive activities include online harassment, spreading mis/disinformation, producing hate speech, and many others. These harmful user behaviors undermine public trust and may discourage users from engaging with the platforms, with consequences that affect the online information ecosystem and our society as a whole. Therefore, it is critical to understand abuse and to design solutions that mitigate these challenges and support trust and safety online. In this dissertation, I discuss my work on characterizing and mitigating abusive behaviors online. To understand such behaviors at the scale of modern-day social media, we need scalable and robust detection methods. However, such methods often fail to address the subtlety of abuse. Taking online harassment as an example, adversaries may use target-specific attacks that are difficult to spot with automatic detection algorithms, as these algorithms are trained on general harassment corpora where such attacks do not appear. We address this issue with contextually aware analysis, using the adversarial interactions with U.S. political candidates on Twitter in 2018 as a case study. Further, by combining qualitative and quantitative methods, we analyze the users who engage in these adversarial interactions, showing that some tend to seek out conflict. While abuse mitigation on public platforms receives more and more attention from both the research community and industry practitioners, the same mitigation strategies are not applicable in private settings. For example, one common practice on public platforms is to scan user communications for known policy-violating content in order to react to such violations in a timely manner. Applying this practice directly in private settings is forbidden, as it violates user privacy. However, abuse in private communications should not be left unmitigated. To this end, we propose mitigation solutions that enable privacy-preserving client-side detection of content that is similar to known bad content. The proposed protocol reveals the detection result to the client without notifying the server. The idea is to improve users' agency when facing abuse such as mis/disinformation campaigns, to give them more context about the content they receive without sacrificing privacy, and to let them make informed decisions on their own. To realize this protocol, we present and formalize the concept of similarity-based bucketization, allowing efficient computation over large datasets of known misinformation images.
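    As a rough illustration of the similarity-based bucketization idea (not the dissertation's actual protocol): known-bad perceptual hashes are grouped into buckets keyed by a hash prefix, and the client compares its query hash only against the matching bucket. The 64-bit hashes, prefix length, and Hamming-distance threshold below are assumptions, and the privacy-preserving retrieval layer the real protocol relies on is omitted.

```python
# Illustrative sketch of similarity-based bucketization over perceptual hashes.
# Assumes 64-bit hashes and Hamming distance as the similarity measure; omits
# the privacy-preserving (e.g., private retrieval) layer of the real protocol.
from collections import defaultdict

PREFIX_BITS = 16  # bucket key length: trade-off between bucket size and leakage

def bucket_key(phash: int) -> int:
    """Top PREFIX_BITS of a 64-bit perceptual hash."""
    return phash >> (64 - PREFIX_BITS)

def build_buckets(known_bad_hashes):
    """Server side: group known-bad hashes by their prefix."""
    buckets = defaultdict(list)
    for h in known_bad_hashes:
        buckets[bucket_key(h)].append(h)
    return buckets

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def client_check(query_hash: int, bucket: list, max_distance: int = 8) -> bool:
    """Client side: compare only against the fetched bucket; the server never
    learns the detection result."""
    return any(hamming(query_hash, h) <= max_distance for h in bucket)

# Toy usage with made-up hashes
known_bad = [0xDEADBEEFCAFEBABE, 0x0123456789ABCDEF]
buckets = build_buckets(known_bad)
q = 0xDEADBEEFCAFEBABF  # one bit away from a known-bad hash
print(client_check(q, buckets.get(bucket_key(q), [])))  # True
```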

    VoterFraud2020: a Multi-modal Dataset of Election Fraud Claims on Twitter

    No full text
    The widespread circulation of unfounded election fraud claims surrounding the U.S. 2020 election undermined trust in the election, culminating in violence inside the U.S. Capitol. Under these circumstances, it is critical to understand the discussions surrounding these claims on Twitter, a major platform where the claims were disseminated. To this end, we collected and released the VoterFraud2020 dataset, a multi-modal dataset with 7.6M tweets and 25.6M retweets from 2.6M users related to voter fraud claims. To make this data immediately useful for a diverse set of research projects, we further enhance it with cluster labels computed from the retweet graph, each user's suspension status, and the perceptual hashes of tweeted images. The dataset also includes aggregate data for all external links and YouTube videos that appear in the tweets. Preliminary analyses of the data show that Twitter's user suspension actions mostly affected a specific community of voter fraud claim promoters, and expose the most common URLs, images, and YouTube videos shared in the data.
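    A hypothetical sketch of how data of this shape might be explored with pandas. The column names ("cluster", "suspended", "retweet_count") and the toy frames are assumptions for illustration, not the actual VoterFraud2020 schema; the real release would be loaded from its published files.

```python
# Hypothetical exploration of user/tweet tables resembling the released data.
# Column names and values are illustrative assumptions, not the real schema.
import pandas as pd

users = pd.DataFrame({
    "user_id":   [1, 2, 3, 4],
    "cluster":   ["A", "A", "B", "B"],   # assumed retweet-graph community label
    "suspended": [1, 1, 0, 1],           # assumed suspension status
})
tweets = pd.DataFrame({
    "tweet_id":      [10, 11, 12, 13],
    "user_id":       [1, 2, 3, 4],
    "retweet_count": [500, 20, 5, 300],
})

# Which retweet-graph cluster did suspensions hit hardest?
suspension_rate = users.groupby("cluster")["suspended"].mean().sort_values(ascending=False)
print(suspension_rate)

# Most-retweeted tweets posted by suspended users
suspended_ids = set(users.loc[users["suspended"] == 1, "user_id"])
top = tweets[tweets["user_id"].isin(suspended_ids)].nlargest(3, "retweet_count")
print(top[["tweet_id", "retweet_count"]])
```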

    Socialized Farmland Operation—An Institutional Interpretation of Farmland Scale Management

    No full text
    Farmland scale management is an important approach for developing countries to ensure food security in the face of the COVID-19 pandemic. At present, realizing farmland scale management through the trading of farmland use rights encounters obstacles in practice; moreover, the new model of farmland scale management has rarely been systematically discussed. Taking the farmland trusteeship practice implemented in Shandong Province, China, as the research case, this study discusses the essence and the preconditions of the new farmland scale management model represented by farmland trusteeship, based on case analysis. The conclusions are as follows. (1) The high cost generated by farmland scale management is the main obstacle to realizing this model. (2) The process of realizing farmland scale management through farmland trusteeship is in fact the process of meeting the requirements of the socialization of farmland use, the socialization of the farmland management process, and the socialization of farmland output. Thus, given the large number of small and scattered farmers in China, socialized farmland operation is the essence of farmland scale management. (3) Effective collective action is the precondition for realizing socialized farmland operation. Undeniably, much more systematic exploration is still needed to strengthen irrigation management and infrastructure, to promote and ensure stable village leadership, and to comprehensively improve the capacity for rural collective action, so as to further strengthen socialized farmland operation and realize stable farmland scale management; this will be pursued in future work.